forked from apache/tvm
update upstream 20190626 #33
Merged
Conversation
wweic commented Jun 26, 2019
…to HalideIR submodule (apache#3169)
* check in * use groups * CHECK_EQ * trigger CI * Update mac_count.cc * trigger CI * trigger CI
* Register and use custom datatypes in TVM This patch adds the ability to register and use a custom datatype from Python, using the `register_datatype` call. The datatype can then be passed as the `dtype` parameter using the syntax `dtype="custom[<type_name>]bitsxlanes"`. * Removes extra file * Register custom datatypes with TVM; specify Cast and Add lowering This commit adds functionality for registering custom datatypes with TVM, and furthermore adding custom lowering functions to lower those custom datatypes. This commit only adds lowering for the Cast and Add ops; more ops will be added soon. Check out some custom datatype samples in my repository of samples: https://github.com/gussmith23/tvm-custom-datatype-samples * Register and lower casts from Python * Formatting * Fix include; was including too much * Add comment * Add DatatypeRegistered * Add storage size field to custom datatypes This field indicates the bitwidth of the opaque block of data into which instances of the datatype will be stored, when TVM compiles. For example, if I create a datatype with a storage size of 16, then - Constants of that datatype will be created as unsigned 16-bit ints - Calls to external functions taking that datatype will pass the data as unsigned 16-bit ints - External functions returning that datatype will be assumed to return unsigned 16-bit ints. * Change how lowering funcs (Cast and other ops) are named in registry tvm.datatypes.lower.<target>.cast.<dst-type>.<src-type> becomes tvm.datatypes.lower.<target>.Cast.<dst-type>.<src-type> And fixes some sloppy code around how the other ops were being formatted. * Update Python register_datatype to accept storage size * Oops, left out one cast->Cast change * Look up storage size when parsing `custom[typename]` When we encounter this type string in Python, it will be parsed into a Halide type object in C++. Some of my original code supported this parsing, but we now have to attach the storage type to the type (by setting the bits field). * Change how external calls for casting/other ops are done Firstly, we now use the storage size of the custom type when determining input/output types; e.g. a cast to a custom type with storage size 16 is seen as a call to an external function returning an opaque uint of size 16. Secondly, write a macro to handle the other ops. Originally I thought I could handle these at runtime, with a single `_register_op` global. I transitioned instead to using individual `_register_Add` etc. calls generated with a macro, but I don't remember why. * When encountering a custom type immediate, generate UIntImm * Translate custom types to LLVM type * Generate correct return type in Casts Originally I was assuming that the result type from casts was always a custom datatype, and so I was making the Call return a UInt type. * Use TVM-idiomatic recursion style in DatatypesLowerer This was actually a bug, I'm pretty sure; we wouldn't have recursed deep on any complex programs. As a result of making this change, I also uncovered another potential bug, where the datatypes lowering pass would attempt to lower a Load of a custom type. By commenting out the `Mutate_` for Load, I was able to stop the error from cropping up, but frankly, I'm not satisfied with the solution; how is it that we are able to run codegen when Loads of custom datatypes are present in the IR? I have not written any code, to my knowledge, that will support this. Perhaps Load does not care about the underlying datatype?
* Use CHECK * Add comment about which Mutate_s are needed * Add comments * Add GetCustomDatatypeRegistered as an extern C function * Formatting, comments, casting * Change how datatype string is formatted * Use bits() instead of GetStorageSize Use bits() instead of GetStorageSize * Change comment * Add datatype.py * Change registered function name (datatypes->datatype) * Remove GetStorageSize * Format custom datatypes like any other datatype Specifically, we now print the bits and lanes after the `custom[...]` string. * Correctly implement datatype lowering in Python * Remove unneeded include * Make function naming consistent * Use CHECK instead of internal_assert * Rename macro * Formatting * Rename functions * Implement Cast lowering `_datatype_register_op` is now able to lower both binary ops and Casts. * Formatting * Formatting * Clang format, google style * Fix std::string/extern "C" warnings * Formatting * Formatting * Lower Allocates and Loads during datatype lowering This should ensure that there are no custom datatypes remaining once datatype lowering is done. This will allow us to remove the code in the LLVM codegen which deals with custom datatypes. * Revert additions to codegen_llvm.cc which are now unneeded * Pass cpplint on lower_datatypes.cc * Add clarifying comment * Remove datatype lowering registration funcs from C++ * Add CHECKs * Remove TODO * Remove all references to storage size * Move and rename function * Rename function * Remove done TODOs and other handled comments * Remove irrelevant Load code and comments * Comment out the IR node types I'm not sure about yet * Add bfloat16 datatype unittest * Fix MakeConstScalar MakeConstScalar for a custom datatype will now call out to a function which can be registered on a per-datatype basis. The function will take a double and return the equivalent value in the custom datatype format. Note that these code paths are not actually used or tested at the moment. I have not yet written an example which uses const scalars of a custom datatype. * Formatting * Change pass name * Allow users to register whatever lowering function they want Tianqi pointed out that users should be able to register whatever lowering function they want, and should not be constrained to registering lowering functions which just call out to external libraries. I still provide a function for making lowering functions which call out to external libraries, for convenience. * Add clarifying comment * Remove unneeded comment * Remove unneeded function * Rename file * Undo unnecessary change * Undo unnecessary change * Make naming consistent Rename "datatypes" to "custom datatypes" in most contexts. * Revert an artifact of old code * Fix build warnings, add TODO * Lint * Remove unnecessary use of extern C by separating decl and impl * Error checking * Remove TODO * Missed a name change * Lint * Python lint * Correctly format datatype * Move bfloat16 to 3rdparty * "custom_datatypes" --> "datatype" in most places I left the pass as "LowerCustomDatatypes" to indicate that we're not lowering anything other than custom datatypes. Otherwise, everything else has been changed. * Upgrade datatype unittest I used a float calculator to generate some real testcases for the unittest. * Separate public includes and private implementation Specifically, create cleaner decoupling between datatypes stuff in packed_func and the datatype registry implementation. 
* Formatting * Limit custom datatype codes to >128 * Add TODOs * Fix comment * Formatting * Clean up datatype unittest * Remove un-exported functions in public headers; UIntImm->FloatImm More places where I accidentally was using implementation-only functions in public headers. Additionally, store custom datatype immediates as FloatImms. A later change will add new lowering logic to lower these FloatImms to UIntImms. Plus formatting change. * Lint * Use FloatImm (not UIntImm) to hold immediates of custom datatypes This change switches from using UIntImm to FloatImm for storing immediates of custom datatypes. The value of the number is stored in a double, which should be enough precision for now, for most custom types we will explore in the immediate future. In line with this change, we change the datatype lowering so that FloatImms are lowered to UInts of the appropriate size. Originally, this was going to be done by allowing the user to register a double->uint_<storage size>_t conversion which would be called at compile time to convert the value from the FloatImm to a UInt and store it in a UIntImm. After discussions with Tianqi, we decided to take the simpler route, and lower FloatImms just as we lower all other ops: by replacing them with Call nodes. In this case, presumably the user will Call out to a conversion function in their datatype library. The justification for this decision is due to the functionality added in apache#1486. This pull request adds the ability to load LLVM bytecode in at compile time. This applies in our case as follows: 1. The user writes their custom datatype programs and registers their lowering functions in the same way we've been doing it so far. All operations over custom datatypes are lowered to Calls to the datatype library. 2. The user compiles their datatype library to LLVM bytecode. 3. At TVM compile time, the user loads the LLVM bytecode. Depending on how the datatype library is written, Clang should be able to perform constant folding over the custom datatype immediates, even if their conversions are done with calls to the library. Additionally adds test to test the FloatImm codepath. * Re-add a change I removed accidentally during rebase * Cleanup * Remove unnecessary TVM_DLLs * Add custom datatype utilities source file to Go runtime pack * Revert "Remove unnecessary TVM_DLLs" This reverts commit 4b742b9. * Mark bfloat code as TVM_DLL * Moves custom datatype runtime utilities to c_runtime_api.cc * Revert "Add custom datatype utilities source file to Go runtime pack" This reverts commit aecbcde. * Move datatype parsing to its own function * Change comments * Remove unneeded function * Formatting * Formatting * Documentation * Add kCustomBegin, use it for checking for custom types * Documentation * Formatting * Move static definition to implementation * Remove comment * Decide toBeLowered before lowering arguments of Expr In the past, e.g. when lowering custom datatypes for an Add, we would lower a and b first, and then decide whether the resulting new Add needed to be lowered based on the (new) types of a and b. Now, instead, we need to check the types of a and b first (to see if they're custom types), and then lower them (so they'll become non-custom types), and then lower the new Add. * Revert "Move datatype parsing to its own function" This reverts commit d554a58. This broke parsing. Will figure this out later. There isn't a really clean way to separate this out given how the rest of the function is written. 
* Replace comment * Documentation * Remove comment and TVM_DLL * Better error messages * Remove artifact of rebase * Separate datatypes parsing to its own function * Add \returns * Comment changes; add TODO * Refactor tests
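The registration flow described in the commit message above can be sketched roughly as follows; this loosely follows the bfloat16 unittest it mentions, and the exact module path (`tvm.datatype`), the helper names, and the library symbol are assumptions rather than a documented API.

```python
import tvm

# Register "bfloat" as a custom datatype; per the commit message, custom type
# codes must be greater than 128.
tvm.datatype.register("bfloat", 129)

# Register a lowering function: a Cast from float32 to the custom type on the
# llvm target is lowered to a Call to an extern "C" function from the user's
# datatype library ("FloatToBFloat16_wrapper" is a placeholder symbol name).
tvm.datatype.register_op(
    tvm.datatype.create_lower_func("FloatToBFloat16_wrapper"),
    "Cast", "llvm", "bfloat", "float")

# The type can then be used through the dtype string syntax
# custom[<type_name>]bitsxlanes (here 16 bits, one lane).
x = tvm.placeholder((8,), dtype="float32", name="x")
y = tvm.compute((8,), lambda i: x[i].astype("custom[bfloat]16"), name="y")
```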
…che#3173) * Stashing for later maybe. * Added new option to leave specific layers unquantized. * Better error checking. * remove unneeded import * tab to spaces * pylint fixes * more pylint fixes
* Some bug fixes in tensorflow graph converter and added DepthToSpace operator. * Made DepthToSpace better comply with other function syntax. * Added better shape inference for unusual situations. * Lint fixes. * Added depthtospace test. * Added test cases for value inference and depthtospace. * Added fill testing. * Made comment changes and added BroadcastTo op and tests. * Fixed underlining and unneeded opt_level forcing. * Added _infer_value assertion that all values to infer are available in passed parameters.
Issue: one of the existing illegal-dependency checks has a condition that is always true; the correct logic should actually perform this check for both store and load. Solution: fix the logic issue described above.
…egistered as nodes (apache#3143)
* [Relay][TOPI] operator All * Update tests/python/frontend/tensorflow/test_forward.py Co-Authored-By: yongwww <55wuyong@163.com> * fix comments * change to level 4
…che#3204) * remove on_device op after annotation * Update src/relay/pass/device_annotation.cc Co-Authored-By: MORINAGA <34588258+imorinaga@users.noreply.github.com>
* Convert TFLite NCHW to NHWC * Minor comment fix
Issue: when using Vivado to compile vta.cc with the top function 'vta', Vivado reports a deadlock error like '...with default size is used in a non-dataflow region, which may result in deadlock. Please consider to resize the stream using the directive 'set_directive_stream' or the 'HLS stream' pragma.' Solution: give the queue a default size of 8.
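A minimal illustration of the kind of fix described (not the actual vta.cc diff): giving the HLS FIFO an explicit depth so Vivado no longer flags a default-sized stream in a non-dataflow region. The stream name and element type below are placeholders.

```cpp
#include <hls_stream.h>
#include <ap_int.h>

// Placeholder top function; in the real code the queues live inside the 'vta'
// top-level function. The pragma gives the FIFO an explicit depth of 8 instead
// of relying on the tool's default size.
void vta_top(hls::stream<ap_uint<128> > &load_queue) {
#pragma HLS STREAM variable = load_queue depth = 8
  ap_uint<128> insn = load_queue.read();  // consume the stream as usual
  (void)insn;
}
```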
* Add EtaExpand to transform API * Add test case
* update README * fix typo
* [QUANTIZE] Support for clip operator * [QUANTIZE] Memorizing the quantize node mapping. * [QUANTIZE] Remove use_stop_fusion and skip_k_conv in qconfig * update * update * update * update
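For context, a hedged sketch of how the quantization pass is driven after the qconfig cleanup noted above (use_stop_fusion and skip_k_conv removed); the remaining option shown (global_scale) and the exact entry points are assumptions about that era's API.

```python
import tvm
from tvm import relay

def quantize_module(mod, params):
    # Options such as use_stop_fusion / skip_k_conv are gone from qconfig;
    # global_scale is used here purely as an illustrative remaining knob.
    with relay.quantize.qconfig(global_scale=8.0):
        return relay.quantize.quantize(mod, params=params)
```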
* Support bidirectional RNN layer * tweak * tweak
Summary: In multi-threaded applications where we have multiple inferences on the same model in parallel (consider e.g. a TTS system handling multiple requests), it can be useful to share the parameters of a model amongst these multiple instances. This improves the cache utilization behaviour of the system, as multiple cores can use the same set of weights instead of evicting the identical copies of weights in a shared cache. As the underlying `NDArray` instances in `data_entry_` implement a ref-counted based sharing system, this is a simple modification of the `GraphRuntime::LoadParams` logic to instead copy parameters from an existing GraphRuntime instance. This is a little ugly in that we need both the pre-existing GraphRuntime instance, as well as the 'serialized' params (since we need to know the set of names we should copy), but without imposing additional assumptions (i.e. storing the set of param names in GraphRuntime, and enforcing that shared param names are identical to the parameters set in the preceding `LoadParams` call), this seems unavoidable. Test Plan: Unit test added.
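A hedged sketch of the sharing pattern described above, from the Python side: the second runtime instance reuses the first instance's parameter NDArrays instead of loading its own copies. The "share_params" entry-point name is an assumption; only create/load_params are standard graph runtime API here.

```python
import tvm
from tvm.contrib import graph_runtime

lib = tvm.module.load("deploy_lib.so")
graph_json = open("deploy_graph.json").read()
params_bytes = bytearray(open("deploy_params.params", "rb").read())

# First instance owns the parameters.
rt0 = graph_runtime.create(graph_json, lib, tvm.cpu(0))
rt0.load_params(params_bytes)

# Second instance shares the same underlying NDArrays (ref-counted), passing
# both the existing module and the serialized params so it knows which names
# to copy; "share_params" is an assumed name for the new entry point.
rt1 = graph_runtime.create(graph_json, lib, tvm.cpu(0))
rt1.module["share_params"](rt0.module, params_bytes)
```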
… issue. (apache#3400) * [VTA] Add VTA PYNQ metal_test bitstream program logic and fix a couple of compile issues. Issue: VTAProgram does not exist, which causes a compile error; there is no logic to program the bitstream into the FPGA; the metal test still uses the PYNQ 2.1 library, which is not supported on the latest PYNQ 2.4. Solution: remove the old VTAProgram; when the setting is pynq, program the bitstream during compile; change the DMA link library to libcma. * Address review comments.
yongwww approved these changes Jun 27, 2019
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Jul 27, 2020
…generating (apache#5962) * Code migration Start (neo-ai#1) * Init commit: Code migration Start * Add loop_state.cc/h * Add ComputeDAG basic test * Split transform_step out & Update more UTs (neo-ai#3) * Split transform_step out * Update GetProducers & GetConsumers * Update UTs * Add UT for CacheReadWrite & Some bug fix * Add search_task, measure and serialization (neo-ai#4) * Add FollowSplit & FollowFusedSplit tests * Update dag.InferBound & its UT * Add search_task, measure and serialization * Update Serialization UT * Add MetaTileRewritePolicy (neo-ai#5) * Add feature * Add cost_model, meta_tile_rewrite_policy * Add MetaTileRewritePolicy basic UT * Basic Python API for State (neo-ai#6) * Add Basic Python API for State * Add UTs for State * Add Python API: Measure & Task (neo-ai#7) * Update the return value of state operation * Add task * Copy measure.py & utils.py * Fix LocalBuilder * Fix LocalRunner * Add ansor.auto_schedule() API; First AutoSchedule working version(neo-ai#8) * Add basic Python support for ansor.auto_schedule * Update AutoSchedule API * Bug fix for get the attach point of a fused iter * Update UT after infer bug fix * Bug fix & Add python serialization API (neo-ai#10) * Delete C++ UT hack since Python is ready * Add ndarray.non_empty * Update Serialization python API * Improve code style, python wrapper and test cases (neo-ai#11) * Update c++ code style and unit test * Update python State wrapper and test cases * fix unit tests * Add RPCRunner & OpenCL/CUDA test (neo-ai#12) * Add RPCRunner & OpenCL search test * Add CUDA search test * Add RPCRunner test * rebase to upstream/master * Add Ansor basic tutorial (neo-ai#13) * Add basic tutorial * migrate feature extraction (neo-ai#14) * Add XGBModel & RPCRunnerWarpper (neo-ai#15) * Add XGBModel & RPCRunnerWarpper * Revert "Add Parallel Granularity Mutation" * Migrate workload_registry.py (neo-ai#16) * add workload registry * update * update * add task scheduler (neo-ai#17) * Add conv2d cuda tutorial with workload registry (neo-ai#18) * add tune_test.py (the old tune_wkl.py) (neo-ai#19) * add tune_test.py (the old tune_wkl.py) * update * fix measure * fix for gpu * Code refine for tune_test.py & Add a pre load callback (neo-ai#20) * Bug fix for tutorials * Add PreLoadMeasuredStates * Add search_callback support for task tuner * Code refine for tune_test.py * Update * Update * Update * Update * Bug fix * Add python custom sketch rule (neo-ai#21) * Add custom sketch rule * Bug fix * Ansor Relay Integration (without layout rewrite) (neo-ai#22) * relay integration * Add tune_op_subgraph.py & Some code clean for tune_network.py (neo-ai#23) * Add single op tune scripts * Add tune subgraph support * Merge all op & all subgraph to one file * Rename file * add explicit_unroll_max_extent (neo-ai#25) * Add Index simplification & API update (neo-ai#26) * Add vectorized cooperative_fetching test * Update math simplify for vectorized CF * File rename * Update tune_network * API update * Update PreLoadMeasuredStates & Some bug fix (neo-ai#27) * Add a threading wrapper to fix the test bug * Set default TVM_USE_AUTO_SCHEDULER to false * Update PreLoadMeasuredStates callback * Add tensorize step for loop_state (neo-ai#31) * Add tensorize step * State python api update (neo-ai#33) * Start to update api * Add compute_dag to state * API update * kernel layout rewrite (neo-ai#28) * kernel layout rewrite * remove some hacks * add defuse_ops pass and move kernel_layout_rewrite pass after fuse_ops pass * set TVM_RELAY_DISABLE_BUILD_CACHE for task 
extraction and prepare_layout_rewrite * [cache flush] port cache flush to ansor (neo-ai#32) * Improve relay integration (neo-ai#34) * tmp checkpoint * Improve relay integration * Improve relay integration * Fix xgb error & Simplify dispatcher (neo-ai#35) * Rename "MetaTileRewritePolicy" to "SketchPolicy". (neo-ai#36) * Rename "MetaTileRewritePolicy" to "SketchPolicy". * Add a new class for auto_unroll_max_step, storage_offset in StageNode * fix tune_op_subgraph.py * rebase * Migrate all node::make to noderef's construct function (neo-ai#37) * Start to move xxxnode::make to noderef() * Update * Update * Finish transform_step * Finish comute dag & auto schedule * Update * Update * Update * Update * Update * Code refine * Code refine * Code refine * Update * Update * Some lint fix & Recover the double constructor of tvm::PrimExpr (neo-ai#39) * lint fix * clang-format-fix * pylint fix * Update * Recover the double constructor of tvm::PrimExpr * Fix pylint * pylint fix * pylint fix * Add MutateComputeLocation and MutateParallel in evolutionary search (neo-ai#40) * Add MutateComputeLocation and MutateParallel in evolutionary search * fix lint * Improve loop state python API (stage_tensors -> stage_ops) (neo-ai#41) * improve loop state python API (stage_tensors -> stage_ops) * fix * ComputeDAG bug fix & Add Custom TensorCore Matmul Example (neo-ai#42) * Bug Fix * Sample example of Custom TensorCore Matmul * Rever Commits, Start to build minimum Ansor system * Code clean for minimum Ansor system * Bug fix & Delete AccessAnalyzer * Delete attachmap & Code clean * Doc update Update statenode::stages from vector to Array * Headfile update & Python doc update * clang-format fix * pylint fix * Update * Doc update * Update * Bug fix after code merge to the new master * clang-format fix * Update * Update * Update std::vector to Array; Update verbosity setting; Some commemts addressed * std::vector->Array & std::string->String * Add init_state to ComputeDAG * Update * Update some unordered_map to Map * clang-format fix * Comments addressed Delete ReplayAndInferBound Delete ReplaySteps & InferBoundCommon * Lint fix * Update * Update * Update * Update * Update * Update * Update * Update * Update * Rename ansor namespace to auto_schedule * Update * Rename ThreadPool to ParallelFor * Add parallel_for * Remove ThreadPool * Update python/tvm/auto_schedule/auto_schedule.py * trigger CI Co-authored-by: Lianmin Zheng <lianminzheng@gmail.com> Co-authored-by: Minmin Sun (孙敏敏) <minmin.smm@alibaba-inc.com> Co-authored-by: Zhao Wu <zhaowu@apache.org>
trevor-m pushed the same commit, referencing this pull request, to other branches on Aug 26, 2020, Sep 2, 2020, and Sep 3, 2020.